
    Nancy: An efficient parallel Network Calculus library

    Full text link
    This paper describes Nancy, a Network Calculus (NC) library that allows users to perform complex min-plus and max-plus algebra operations efficiently. To the best of our knowledge, Nancy is the only open-source library that implements operations on arbitrary piecewise affine functions, and the first to implement some of them (e.g., sub-additive closure and function composition). Nancy allows researchers to compute NC results using a straightforward syntax that matches the algebraic one. Moreover, it is designed with computational efficiency in mind: it exploits optimized data structures, it uses inheritance to allow for faster algorithms when they are available (e.g., for specific subclasses of functions), and it is natively parallel, thus reaping the benefits of multicore hardware. This makes it usable to solve NC problems that were previously considered beyond the realm of tractability.
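The core operation such a library implements, the (min,+) convolution, can be illustrated with a toy discrete-time version (a minimal Python sketch over finite integer-time sequences; Nancy itself operates on arbitrary piecewise affine functions, and this is not its API):

```python
def min_plus_conv(f, g):
    """Discrete (min,+) convolution of two finite sequences:
    (f ⊗ g)[t] = min over s in [0, t] of f[s] + g[t - s]."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]
```

Even this naive version shows the quadratic cost per output point that motivates the optimizations the paper describes.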

    Isospeed: Improving (min,+) Convolution by Exploiting (min,+)/(max,+) Isomorphism (Artifact)

    Get PDF
    (min,+) convolution is the key operation in (min,+) algebra, a theory often used to compute performance bounds in real-time systems. As observed in many works, its algorithm can be computationally expensive, because: i) its complexity is superquadratic with respect to the size of the operands; ii) operands must be extended before the computation starts; and iii) said extension is tied to the least common multiple of the operand periods. In this paper, we leverage the isomorphism between (min,+) and (max,+) algebras to devise a new algorithm for (min,+) convolution in which the need for operand extension is minimized. This algorithm is considerably faster than the ones known so far, and it allows us to abate the computation times of (min,+) convolution by orders of magnitude.

    Isospeed: Improving (min,+) Convolution by Exploiting (min,+)/(max,+) Isomorphism

    Get PDF
    (min,+) convolution is the key operation in (min,+) algebra, a theory often used to compute performance bounds in real-time systems. As observed in many works, its algorithm can be computationally expensive, because: i) its complexity is superquadratic with respect to the size of the operands; ii) operands must be extended before the computation starts; and iii) said extension is tied to the least common multiple of the operand periods. In this paper, we leverage the isomorphism between (min,+) and (max,+) algebras to devise a new algorithm for (min,+) convolution in which the need for operand extension is minimized. This algorithm is considerably faster than the ones known so far, and it allows us to reduce the computation times of (min,+) convolution by orders of magnitude.
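For reference, the two convolutions related by the isomorphism are defined as follows (standard Network Calculus definitions, reproduced here for context rather than taken from the paper):

```latex
% (min,+) convolution
(f \otimes g)(t) = \inf_{0 \le s \le t} \bigl\{ f(s) + g(t - s) \bigr\}

% (max,+) convolution
(f \,\overline{\otimes}\, g)(t) = \sup_{0 \le s \le t} \bigl\{ f(s) + g(t - s) \bigr\}
```

The isomorphism maps non-decreasing functions between the two algebras via pseudo-inverses, so a computation in one algebra can be traded for one in the other.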

    The Road towards Predictable Automotive High-Performance Platforms

    Get PDF
    Due to the trends of centralizing the E/E architecture and new computing-intensive applications, high-performance hardware platforms are currently finding their way into automotive systems. However, the SoCs currently available on the market have significant weaknesses when it comes to providing predictable performance for time-critical applications. The main reason for this is that these platforms are optimized for average-case performance. This shortcoming represents one major risk in the development of current and future automotive systems. In this paper, we describe how high performance and predictability could (and should) be reconciled in future HW/SW platforms. We believe that this goal can only be reached through close collaboration between system suppliers, IP providers, semiconductor companies, and OS/hypervisor vendors. Furthermore, academic input will be needed to solve the remaining challenges and to further improve initial solutions.

    Study and implementation of a Network Calculus tool for worst case performance on Networks-on-Chip

    No full text
    A Network-on-Chip (NoC) is the communication system on an integrated circuit that enables IP cores to exchange data with each other. Inspired by large-scale network technologies, NoCs are adapted to the different requirements of integrated systems in terms of area, end-to-end delay, and bandwidth. With the diffusion of mobile technologies and their increasing requirements of miniaturization and performance, it is increasingly difficult to design ad-hoc NoCs. Network Calculus is a mathematical framework for analysing performance guarantees in computer networks. While successfully applied to the Internet, its models are too simple to represent NoC routing technologies, in particular wormhole routing, and its congestion issues. To aid the design of NoCs, in this work we show how to model wormhole networks using Network Calculus as a series of FCFS buffers, and how these models were used to develop an integrated tool to study worst-case end-to-end delay on a wormhole network.
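The delay bound such a tool computes is typically the textbook Network Calculus result (stated here for context, not taken from the thesis): for a flow with arrival curve α crossing a node offering service curve β, the delay is bounded by the horizontal deviation between the two curves:

```latex
h(\alpha, \beta) = \sup_{t \ge 0} \, \inf \bigl\{ d \ge 0 : \alpha(t) \le \beta(t + d) \bigr\}
```

Intuitively, h(α, β) is the largest horizontal gap between the arrival curve and the service curve.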

    Computationally efficient worst-case analysis of flow-controlled networks with Network Calculus

    Full text link
    Networks with hop-by-hop flow control occur in several contexts, from data centers to systems architectures (e.g., wormhole-routing networks on chip). A worst-case end-to-end delay in such networks can be computed using Network Calculus (NC), an algebraic theory where traffic and service guarantees are represented as curves in a Cartesian plane. NC uses transformation operations, e.g., the min-plus convolution, to model how the traffic profile changes with the traversal of network nodes. NC allows one to model flow-controlled systems, hence one can compute the end-to-end service curve describing the minimum service guaranteed to a flow traversing a tandem of flow-controlled nodes. However, while the algebraic expression of such an end-to-end service curve is quite compact, its computation is often intractable from an algorithmic standpoint: data structures tend to explode, making operations infeasibly complex, even with as few as three hops. In this paper, we propose computational and algebraic techniques to mitigate the above problem. We show that existing techniques (such as reduction to compact domains) cannot be used in this case, and propose an arsenal of solutions, which include methods to mitigate the data representation space explosion as well as computationally efficient algorithms for the min-plus convolution operation. We show that our solutions allow a significant speedup, enable analysis of previously infeasible case studies, and - since they do not rely on any approximation - still provide exact results. Comment: 34 pages, 28 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
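The end-to-end service curve of a tandem (absent flow control) is the (min,+) convolution of the per-node service curves. A toy discrete-time sketch (assuming integer time sampling and simple rate-latency curves; the paper's curves are far more general) reproduces the well-known closed form for convolving two rate-latency curves:

```python
def min_plus_conv(f, g):
    """Discrete (min,+) convolution of two finite sequences."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

def rate_latency(R, T, n):
    """Rate-latency service curve beta(t) = R * max(0, t - T), sampled at t = 0..n-1."""
    return [R * max(0, t - T) for t in range(n)]

# Two nodes in tandem: rates 2 and 3, latencies 1 and 2.
e2e = min_plus_conv(rate_latency(2, 1, 8), rate_latency(3, 2, 8))
# Closed form: rate min(2, 3) = 2, latency 1 + 2 = 3.
```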

    Extending the Network Calculus Algorithmic Toolbox for Ultimately Pseudo-Periodic Functions: Pseudo-Inverse and Composition

    Full text link
    Network Calculus (NC) is an algebraic theory that represents traffic and service guarantees as curves in a Cartesian plane, in order to compute performance guarantees for flows traversing a network. NC uses transformation operations, e.g., the min-plus convolution of two curves, to model how the traffic profile changes with the traversal of network nodes. Such operations, while mathematically well-defined, can quickly become unmanageable to compute using simple pen and paper for any non-trivial case, hence the need for algorithmic descriptions. Previous work identified the class of piecewise affine functions which are ultimately pseudo-periodic (UPP) as being closed under the main NC operations and describable finitely. Algorithms that embody NC operations taking UPP curves as operands have been defined and proved correct, thus enabling software implementations of these operations. However, recent advancements in NC make use of operations, namely the lower pseudo-inverse, upper pseudo-inverse, and composition, that are well defined from an algebraic standpoint, but whose algorithmic aspects have not been addressed yet. In this paper, we introduce algorithms for the above operations when operands are UPP curves, thus extending the available algorithmic toolbox for NC. We discuss the algorithmic properties of these operations, providing formal proofs of correctness. Comment: Preprint submitted to the Journal of Discrete Event Dynamic Systems.
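The two pseudo-inverses of a non-decreasing function f are defined as inf{t : f(t) ≥ y} (lower) and sup{t : f(t) ≤ y} (upper). A minimal discrete sketch of these definitions (illustrative only; the paper's algorithms operate on UPP piecewise affine curves, not sampled sequences):

```python
def lower_pseudo_inverse(f, y):
    """inf { t : f(t) >= y } for a non-decreasing sequence f, or None if never reached."""
    for t, v in enumerate(f):
        if v >= y:
            return t
    return None

def upper_pseudo_inverse(f, y):
    """sup { t : f(t) <= y } for a non-decreasing sequence f, or None if f(0) > y."""
    best = None
    for t, v in enumerate(f):
        if v <= y:
            best = t
    return best
```

Where f has a plateau, the two pseudo-inverses differ: the lower one returns the start of the plateau, the upper one its end.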

    Heterogeneous systems modelling with Adaptive Traffic Profiles and its application to worst-case analysis of a DRAM controller

    No full text
    Computing systems are evolving towards more complex, heterogeneous systems, where multiple computing cores and accelerators on the same system work together to improve the utilization of computing resources, resource re-use, and the efficiency of data sharing across workloads. Such complex systems require equally complex tools and models to design and engineer them so that their use-case requirements can be satisfied. Adaptive Traffic Profiles (ATP) introduce a fast prototyping technology which allows one to model the dynamic memory behavior of computer system devices when executing their workloads. ATP defines a standard file format and comes with an open-source transaction generator engine written in C++. Both ATP files and the engine are portable and pluggable into different host platforms, allowing workloads to be assessed with various models at different levels of abstraction. We present here the ATP technology developed at Arm and published in [5]. We present a case study involving the usage of ATP, namely the analysis of the worst-case latency at a DRAM controller, which is assessed via two separate toolchains, both using traffic modelling encoded in ATP.

    A MILP Approach to DRAM Access Worst-Case Analysis

    No full text
    The Dynamic Random Access Memory (DRAM) is among the major points of contention in multi-core systems. We consider a challenging optimization problem arising in worst-case performance analysis of systems architectures: computing the worst-case delay (WCD) experienced when accessing the DRAM due to the interference of contending requests. The WCD is a crucial input for micro-architectural design of systems with reliable end-to-end performance guarantees, which is required in many applications, such as when strict real-time requirements must be imposed. The problem can be modeled as a mixed integer linear program (MILP), for which standard MILP software struggles to solve even small instances. Using a combination of upper and lower scenario bounding, we show how to solve realistic instances in a matter of a few minutes. A novel ingredient of our approach, with respect to other WCD analysis techniques, is the possibility of computing the exact WCD rather than an upper bound, as well as providing the corresponding scenario, which represents crucial information for future memory design improvements.
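The flavor of the problem can be conveyed with a deliberately toy model (entirely hypothetical; this is neither the paper's DRAM model nor its MILP formulation): exhaustively searching interference scenarios for the exact worst case, which is what the paper's bounding techniques make tractable at realistic sizes:

```python
from itertools import permutations

# Toy cost model: each contending request targets a bank; a request costs
# CONFLICT cycles if the previous request hit the same bank, else HIT cycles.
HIT, CONFLICT = 10, 30

def delay(order):
    """Total delay of a given request ordering under the toy cost model."""
    total, prev = 0, None
    for bank in order:
        total += CONFLICT if bank == prev else HIT
        prev = bank
    return total

def worst_case_delay(banks):
    """Exact WCD by brute-force enumeration of all interference scenarios."""
    return max(delay(p) for p in permutations(banks))
```

Brute force is factorial in the number of requests; the upper/lower scenario bounding of the paper serves precisely to avoid this enumeration while keeping the result exact.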